153 research outputs found

    Artificial Intelligence: risks, benefits and responsible use

    We are using Artificial Intelligence-based technologies in an increasing number of systems and tools. Artificial Intelligence can reduce the need for human presence in many dangerous, monotonous and tiring activities, freeing us for less dangerous and more challenging and stimulating activities. At the same time, Artificial Intelligence can increase existing risks and introduce new risks. To avoid or reduce these risks, new Artificial Intelligence algorithms must be developed or used in new and innovative ways, taking into account ethical, social and legal issues.

    Feasibility of active learning with extreme learning machines

    Machine learning requires the induction of predictive models. Two problems are frequently associated with this task: labeling cost and training time. This is especially true in the presence of massive amounts of data and in interactive systems, or systems that require an immediate response. Part of the solution is the use of active learning, which selects the examples to be labeled according to a relevance criterion. The complement of the solution is the adoption of a fast and robust learning algorithm, such as extreme learning machines. In this article, several active learning strategies are experimentally compared on different datasets in order to fill a notable gap in the active learning and extreme learning machine literatures. The results demonstrate the feasibility of combining the two areas. Funding: CAPES, CNPq, FAPESP.
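
    A minimal sketch of the combination described above: pool-based active learning (margin-based uncertainty sampling) driving a basic Extreme Learning Machine (random hidden layer, least-squares output weights). The dataset, network size and query budget are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    class SimpleELM:
        """Single-hidden-layer ELM: random input weights, analytic output weights."""
        def __init__(self, n_hidden=300, seed=0):
            self.n_hidden, self.seed = n_hidden, seed

        def fit(self, X, y):
            rng = np.random.default_rng(self.seed)
            self.classes_ = np.unique(y)
            T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
            self.W = rng.normal(scale=1.0 / np.sqrt(X.shape[1]),
                                size=(X.shape[1], self.n_hidden))
            self.b = rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                           # random hidden layer
            self.beta = np.linalg.pinv(H) @ T                          # least-squares output weights
            return self

        def decision_function(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

        def predict(self, X):
            return self.classes_[self.decision_function(X).argmax(axis=1)]

    def margin_query(model, X_pool, n_queries=20):
        """Pick the pool examples with the smallest gap between the two highest scores."""
        scores = np.sort(model.decision_function(X_pool), axis=1)
        margins = scores[:, -1] - scores[:, -2]
        return np.argsort(margins)[:n_queries]

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                                       # simple feature scaling
    X_lab, X_pool, y_lab, y_pool = train_test_split(X, y, train_size=30,
                                                    stratify=y, random_state=0)
    for _ in range(10):                                                # 10 labeling rounds
        elm = SimpleELM().fit(X_lab, y_lab)
        idx = margin_query(elm, X_pool)
        X_lab = np.vstack([X_lab, X_pool[idx]])                        # oracle labels the queried examples
        y_lab = np.concatenate([y_lab, y_pool[idx]])
        X_pool = np.delete(X_pool, idx, axis=0)
        y_pool = np.delete(y_pool, idx)

    final = SimpleELM().fit(X_lab, y_lab)
    print("accuracy on the remaining pool:", accuracy_score(y_pool, final.predict(X_pool)))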

    The selective use of binary classifiers for solving multi-label problems

    Some classification tasks allow examples to belong to more than one class simultaneously; this setting is called multi-label classification. A simple and efficient way to solve problems of this nature is to transform them into several binary problems and handle each one independently. In general, the same base classifier is used to induce the various models, without considering its bias and the particularities of each binary set. In this study, we investigate the hypothesis that using the most suitable classifier for each binary set improves multi-label classification. Using the Binary Relevance transformation method, a meta-learning strategy was adopted to recommend the appropriate classifier for each subproblem. The experimental results validate the investigated hypothesis and show the potential of the approach. Furthermore, the proposed strategy is generic and can be applied to other multi-label transformation problems. Funding: FAPESP.
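
    A minimal sketch of Binary Relevance with a different base classifier per label. Here the per-label "recommendation" is a simple cross-validation pick, a stand-in for the meta-learning recommender described in the abstract; the synthetic data and candidate classifiers are assumptions for illustration.

    import numpy as np
    from sklearn.base import clone
    from sklearn.datasets import make_multilabel_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # Toy multi-label data standing in for a real benchmark.
    X, Y = make_multilabel_classification(n_samples=600, n_classes=5, random_state=0)
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

    candidates = [LogisticRegression(max_iter=2000),
                  DecisionTreeClassifier(random_state=0),
                  GaussianNB()]

    models = []
    for j in range(Y_tr.shape[1]):                          # one binary subproblem per label
        scores = [cross_val_score(clone(c), X_tr, Y_tr[:, j], cv=3, scoring="f1").mean()
                  for c in candidates]
        best = clone(candidates[int(np.argmax(scores))])    # classifier picked for label j
        models.append(best.fit(X_tr, Y_tr[:, j]))

    Y_pred = np.column_stack([m.predict(X_te) for m in models])
    print("macro F1 with per-label classifiers:", f1_score(Y_te, Y_pred, average="macro"))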

    Pre-processing for noise detection in gene expression classification data

    Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy instances. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently in use frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. This evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers on the pre-processed data. Funding: São Paulo State Research Foundation (FAPESP), CNPq.
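
    A minimal sketch of one distance-based noise filter (a kNN, edited-nearest-neighbour style rule): drop training instances whose label disagrees with the majority of their k nearest neighbours. This illustrates the general idea, not the exact techniques evaluated in the paper, and uses a small built-in dataset as a stand-in for gene expression data.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestNeighbors
    from sklearn.svm import SVC

    def knn_noise_filter(X, y, k=5):
        """Keep only instances whose label agrees with at least half of their k neighbours."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)     # +1: each point is its own neighbour
        _, idx = nn.kneighbors(X)
        neigh_labels = y[idx[:, 1:]]                        # drop the self-neighbour
        agreement = (neigh_labels == y[:, None]).mean(axis=1)
        return agreement >= 0.5

    X, y = load_breast_cancer(return_X_y=True)              # stand-in for a gene expression dataset
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    keep = knn_noise_filter(X_tr, y_tr)
    clf_raw = SVC().fit(X_tr, y_tr)
    clf_filtered = SVC().fit(X_tr[keep], y_tr[keep])

    print("removed instances:", (~keep).sum())
    print("accuracy, raw training set:     ", accuracy_score(y_te, clf_raw.predict(X_te)))
    print("accuracy, filtered training set:", accuracy_score(y_te, clf_filtered.predict(X_te)))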

    Neural networks and genetic algorithms for hierarchical multi-label classification problems

    In conventional classification problems, each instance of a dataset is associated with just one of two or more classes. However, there are more complex classification problems in which instances can be simultaneously classified into classes belonging to two or more paths of a hierarchy. Such a hierarchy can be structured as a tree or as a directed acyclic graph. These problems are known in the machine learning literature as hierarchical multi-label classification (HMC) problems. In this Thesis, two methods for hierarchical multi-label classification are proposed and investigated. The first one associates a Multi-Layer Perceptron (MLP) with each hierarchical level, each MLP being responsible for the predictions at its associated level. This method is called HMC-LMLP. The second method, called HMC-GA, induces hierarchical multi-label classification rules using a Genetic Algorithm. Experiments using hierarchies structured as trees showed that HMC-LMLP obtained classification performance superior to the state-of-the-art method in the literature, and superior or competitive performance when using graph-structured hierarchies. The HMC-GA method obtained results competitive with other methods in the literature on both tree- and graph-structured hierarchies, in many cases inducing fewer and smaller rules. Funding: FAPESP (grant 2009/17401-2), CNPq.
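
    A minimal sketch loosely modeled on the "one MLP per hierarchy level" idea behind HMC-LMLP: each level is treated as a multi-label problem, and the class probabilities predicted at one level are appended to the input of the next level's network. The synthetic data and the two-level hierarchy below are assumptions for illustration only, not the Thesis' architecture or benchmarks.

    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Level 1 has 3 classes; level 2 has 6 classes grouped as children of the level-1 classes.
    X, Y2 = make_multilabel_classification(n_samples=500, n_classes=6, random_state=0)
    Y1 = np.column_stack([Y2[:, :2].max(axis=1),      # a parent is active if any child is active
                          Y2[:, 2:4].max(axis=1),
                          Y2[:, 4:].max(axis=1)])

    X_tr, X_te, Y1_tr, Y1_te, Y2_tr, Y2_te = train_test_split(X, Y1, Y2, random_state=0)

    # One MLP for the first hierarchical level.
    mlp_level1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    mlp_level1.fit(X_tr, Y1_tr)

    # The level-2 MLP sees the original features plus the level-1 class probabilities.
    P1_tr = mlp_level1.predict_proba(X_tr)
    mlp_level2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    mlp_level2.fit(np.hstack([X_tr, P1_tr]), Y2_tr)

    P1_te = mlp_level1.predict_proba(X_te)
    Y2_pred = mlp_level2.predict(np.hstack([X_te, P1_te]))
    print("level-2 subset accuracy:", (Y2_pred == Y2_te).all(axis=1).mean())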

    ClusterOSS: a new undersampling method for imbalanced learning

    A dataset is said to be imbalanced when its classes are disproportionately represented in terms of the number of instances they contain. This problem is common in applications such as the medical diagnosis of rare diseases, the detection of fraudulent calls, and signature recognition. In this paper, we propose ClusterOSS, an alternative method for imbalanced learning that balances the dataset using an undersampling strategy. We show that ClusterOSS outperforms OSS, the method on which it is based. Moreover, we show that the results can be further improved by combining ClusterOSS with random oversampling. Funding: FAPESP, CAPES, CNPq.
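
    A simplified sketch of cluster-based undersampling in the spirit of this approach (not the exact ClusterOSS algorithm): cluster the majority class with k-means and keep, from each cluster, only the instances closest to the cluster centre, together with all minority-class instances. Dataset, class labels and cluster count are illustrative assumptions.

    import numpy as np
    from collections import Counter
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
    maj, mino = 0, 1                                    # majority / minority class labels
    X_maj, X_min = X[y == maj], X[y == mino]

    k = 10
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_maj)
    dist = np.linalg.norm(X_maj - km.cluster_centers_[km.labels_], axis=1)

    keep_idx = []
    per_cluster = len(X_min) // k                       # roughly balance the two classes
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        closest = members[np.argsort(dist[members])[:per_cluster]]
        keep_idx.extend(closest)

    X_bal = np.vstack([X_maj[keep_idx], X_min])
    y_bal = np.concatenate([np.full(len(keep_idx), maj), np.full(len(X_min), mino)])
    print("before:", Counter(y), "after:", Counter(y_bal))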

    Selectively inhibiting learning bias for active sampling

    Efficient training of machine learning algorithms requires a reliable labeled set from the application domain. Usually, data labeling is a costly process, so a selective approach is desirable. Active learning has been successfully used to reduce the labeling effort, due to its parsimonious process of querying the labeler. Nevertheless, many active learning strategies depend on early predictions made by learning algorithms, which can be a major problem when the learner is still unable to provide reliable information. In this context, agnostic strategies can be convenient, since they dispense with internal learners, usually favoring exploratory queries. On the other hand, prospective queries could benefit from a learning bias. In this article, we highlight the advantages of the agnostic approach and propose how to exploit some of them without forgoing prospection. A simple hybrid strategy and a visualization tool called ranking curves are proposed as a proof of concept. The tool made it possible to see clearly when the presence of a learner was likely to be detrimental. Finally, the hybrid strategy compared favorably to its counterpart in the literature, to pure agnostic strategies and to the usual baseline of the field. Funding: CAPES, CNPq, FAPESP.
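
    A minimal sketch of a hybrid active-sampling loop: early rounds use an agnostic, exploratory criterion (farthest-first selection on distances alone, no learner involved); later rounds switch to learner-based uncertainty sampling. The switching rule, learner and all parameters are illustrative assumptions, not the article's strategy.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import pairwise_distances

    X, y = make_classification(n_samples=1000, n_informative=5, random_state=0)
    labeled = list(np.random.default_rng(0).choice(len(X), size=5, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]

    for round_ in range(30):
        if round_ < 10:
            # Agnostic query: the pool point farthest from everything already labeled.
            d = pairwise_distances(X[pool], X[labeled]).min(axis=1)
            pick = pool[int(np.argmax(d))]
        else:
            # Learner-based query: the pool point the current model is least sure about.
            clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
            proba = clf.predict_proba(X[pool])
            pick = pool[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]
        labeled.append(pick)            # the oracle's label y[pick] becomes available here
        pool.remove(pick)

    final = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    print("accuracy on the rest of the pool:", final.score(X[pool], y[pool]))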

    Meta-learning recommendation of default hyper-parameter values for SVMs in classification tasks

    Machine learning algorithms have been investigated in several scenarios, one of them being data classification. The predictive performance of the models induced by these algorithms is usually strongly affected by the values used for their hyper-parameters. Different approaches to defining these values have been proposed, such as the use of default values and of optimization techniques. Although default values can result in models with good predictive performance, different implementations of the same machine learning algorithm use different default values, leading to models with clearly different predictive performance for the same dataset. Optimization techniques have been used to search for hyper-parameter values able to maximize the predictive performance of the induced models for a given dataset, but with the drawback of a high computational cost. A compromise is to use an optimization technique to search for values that are suitable for a wide spectrum of datasets. This paper investigates the use of meta-learning to recommend default values for the induction of Support Vector Machine models on a new classification dataset. We compare the default values suggested by the Weka and LibSVM tools with default values optimized by meta-heuristics on a large range of datasets. This study covers only classification tasks, but we believe similar ideas could be used in other related tasks. According to the experimental results, meta-models can accurately predict whether the tool-suggested or the optimized default values should be used. Funding: CAPES, CNPq, São Paulo Research Foundation (FAPESP) (grant #2012/23114-9).
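
    A minimal sketch of the meta-learning pipeline: for each dataset, extract simple meta-features, measure whether a small grid search beats the library's default SVM hyper-parameters, and train a meta-model on that relation. The meta-features, grid, datasets and meta-model below are illustrative assumptions, far smaller than the study's setup (which compares Weka/LibSVM defaults with meta-heuristic-optimized values).

    import numpy as np
    from sklearn.datasets import load_breast_cancer, load_digits, load_iris, load_wine
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    meta_X, meta_y = [], []
    for loader in (load_iris, load_wine, load_breast_cancer, load_digits):
        X, y = loader(return_X_y=True)
        # Meta-features: dataset size, dimensionality, number of classes.
        meta_X.append([len(X), X.shape[1], len(np.unique(y))])
        # Default SVC (scikit-learn defaults) vs. a tiny grid search over C and gamma.
        default_acc = cross_val_score(SVC(), X, y, cv=3).mean()
        grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)
        tuned_acc = cross_val_score(grid, X, y, cv=3).mean()
        meta_y.append(int(tuned_acc > default_acc))     # 1 = optimized values helped

    # Meta-model: predicts, from meta-features alone, whether tuning is worthwhile.
    meta_model = DecisionTreeClassifier(random_state=0).fit(meta_X, meta_y)
    print("meta-target per dataset:", meta_y)
    print("prediction for a hypothetical new dataset:", meta_model.predict([[500, 30, 2]]))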